

Search for: All records

Creators/Authors contains: "Tomesh, Teague"

  1. Cloud-based quantum computers have become a reality, with a number of companies offering cloud access to machines with tens to more than 100 qubits. With such easy access, quantum information processing may revolutionize computation, and superconducting transmon-based quantum computers are among the more promising devices available. Cloud service providers today host a variety of these and other prototype quantum computers with highly diverse device properties, sizes, and performances. The variation that exists in today’s quantum computers, even among those built on the same underlying hardware, motivates the study of how one device can be clearly differentiated and identified from the next. As a case study, this work focuses on the properties of 25 IBM superconducting, fixed-frequency transmon-based quantum computers that range in age from a few months to approximately 2.5 years. Through the analysis of current and historical quantum computer calibration data, this work uncovers key features within the machines, primarily frequency characteristics of transmon qubits, that can serve as a basis for a unique hardware fingerprint of each quantum computer.
    Free, publicly-accessible full text available October 28, 2024
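    To make the fingerprinting idea concrete, below is a minimal sketch assuming calibration snapshots are available as per-qubit frequency lists; the feature choice (label-invariant sorted frequencies) and the nearest-neighbor distance metric are illustrative assumptions, not the paper's exact method, and the snapshots shown are hypothetical.

```python
import numpy as np

def fingerprint(freqs_ghz):
    # Sort so the fingerprint is invariant to qubit labeling.
    return np.sort(np.asarray(freqs_ghz, dtype=float))

def identify(unknown_freqs, known_machines):
    # Return the known machine whose fingerprint is closest in Euclidean distance.
    fp = fingerprint(unknown_freqs)
    return min(known_machines,
               key=lambda name: np.linalg.norm(fp - fingerprint(known_machines[name])))

# Hypothetical 5-qubit calibration snapshots (frequencies in GHz).
known = {
    "machine_a": [4.971, 5.033, 5.125, 5.201, 5.289],
    "machine_b": [4.845, 4.912, 5.067, 5.154, 5.310],
}
print(identify([5.034, 4.970, 5.126, 5.202, 5.288], known))  # -> machine_a
```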
  2. Recent work has proposed and explored using coreset techniques for quantum algorithms that operate on classical data sets to accelerate the applicability of these algorithms on near-term quantum devices. We apply these ideas to Quantum Boltzmann Machines (QBM), where gradient-based steps which require Gibbs state sampling are the main computational bottleneck during training. By using a coreset in place of the full data set, we try to minimize the number of steps needed and accelerate the overall training time. In a regime where computational time on quantum computers is a precious resource, we propose this might lead to substantial practical savings. We evaluate this approach on 6x6 binary images from an augmented bars and stripes data set using a QBM with 36 visible units and 8 hidden units. Using an Inception-score-inspired metric, we compare QBM training times with and without using coresets.
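    A minimal sketch of the coreset step is given below, assuming the data set is an array of flattened 6x6 binary images; the importance-sampling construction and weights shown here are a generic stand-in for the specific coreset used in the paper.

```python
import numpy as np

def build_coreset(data, m, rng):
    # Sample m points with probability proportional to distance from the mean,
    # then weight each sample by 1/(m * p_i) so weighted sums stay unbiased.
    dists = np.linalg.norm(data - data.mean(axis=0), axis=1) + 1e-9
    p = dists / dists.sum()
    idx = rng.choice(len(data), size=m, replace=True, p=p)
    return data[idx], 1.0 / (m * p[idx])

rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(1000, 36)).astype(float)  # stand-in for bars-and-stripes images
coreset, weights = build_coreset(data, m=50, rng=rng)
print(coreset.shape, round(weights.sum(), 1))  # (50, 36); total weight ~ full set size
```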
  3. Simulating the time evolution of a physical system at quantum mechanical levels of detail, known as Hamiltonian Simulation (HS), is an important and interesting problem across physics and chemistry. For this task, algorithms that run on quantum computers are known to be exponentially faster than classical algorithms; in fact, this application motivated Feynman to propose the construction of quantum computers. Nonetheless, there are challenges in reaching this performance potential. Prior work has focused on compiling circuits (quantum programs) for HS with the goal of maximizing either accuracy or gate cancellation. Our work proposes a compilation strategy that simultaneously advances both goals. At a high level, we use classical optimizations such as graph coloring and the travelling salesperson problem to order the execution of quantum programs. Specifically, we group together mutually commuting terms in the Hamiltonian (a matrix characterizing the quantum mechanical system) to improve the accuracy of the simulation. We then rearrange the terms within each group to maximize gate cancellation in the final quantum circuit. These optimizations work together to improve HS performance and result in an average 40% reduction in circuit depth. This work advances the frontier of HS, which in turn can advance physical and chemical modeling in both basic and applied sciences.
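    The grouping step can be sketched as follows, assuming Hamiltonian terms are given as Pauli strings: coloring the non-commutation graph puts mutually commuting terms in the same group. The within-group travelling-salesperson reordering for gate cancellation is omitted here, and the example terms are hypothetical.

```python
import itertools
import networkx as nx

def commute(p, q):
    # Two Pauli strings commute iff they differ on an even number of
    # positions where both operators are non-identity.
    return sum(a != b for a, b in zip(p, q) if a != "I" and b != "I") % 2 == 0

terms = ["XXII", "YYII", "ZZII", "IIXX", "ZIZI", "XIXI"]
g = nx.Graph()
g.add_nodes_from(terms)
g.add_edges_from((p, q) for p, q in itertools.combinations(terms, 2) if not commute(p, q))

# A proper coloring of the non-commutation graph means every pair of
# same-colored terms commutes, so each color class is one group.
coloring = nx.coloring.greedy_color(g, strategy="largest_first")
groups = {}
for term, color in coloring.items():
    groups.setdefault(color, []).append(term)
print(list(groups.values()))
```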
  4. Many quantum algorithms for machine learning require access to classical data in superposition. However, for many natural data sets and algorithms, the overhead required to load the data set in superposition can erase any potential quantum speedup over classical algorithms. Recent work by Harrow introduces a new paradigm in hybrid quantum-classical computing to address this issue, relying on coresets to minimize the data loading overhead of quantum algorithms. We investigated using this paradigm to perform k-means clustering on near-term quantum computers by casting it as a QAOA optimization instance over a small coreset. We used numerical simulations to compare the performance of this approach to classical k-means clustering. We were able to find data sets with which coresets work well relative to random sampling and where QAOA could potentially outperform standard k-means on a coreset. However, finding data sets where both coresets and QAOA work well, which is necessary for a quantum advantage over k-means on the entire data set, appears to be challenging.
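    As a purely classical illustration of the setup, the sketch below compares k-means on a full data set with weighted k-means on a small coreset; uniform sampling is used here for brevity, whereas the paper's coresets are importance-weighted, and the QAOA step is not shown.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Two well-separated Gaussian blobs as a toy data set.
data = np.vstack([rng.normal(c, 0.3, size=(200, 2)) for c in ([0, 0], [3, 3])])

full = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)

# Uniform 20-point coreset; each point carries weight n/m so totals match.
idx = rng.choice(len(data), size=20, replace=False)
coreset, weights = data[idx], np.full(20, len(data) / 20)
small = KMeans(n_clusters=2, n_init=10, random_state=0).fit(coreset, sample_weight=weights)

def cost(centers):
    # k-means objective of the given centers evaluated on the full data set.
    return np.sum(np.min(np.linalg.norm(data[:, None] - centers, axis=2) ** 2, axis=1))

print(cost(full.cluster_centers_), cost(small.cluster_centers_))
```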
  5. Quantum computing (QC) is a new paradigm offering the potential of exponential speedups over classical computing for certain computational problems. Each additional qubit doubles the size of the computational state space available to a QC algorithm. This exponential scaling underlies QC’s power, but today’s Noisy Intermediate-Scale Quantum (NISQ) devices face significant engineering challenges in scalability. The set of quantum circuits that can be reliably run on NISQ devices is limited by their noisy operations and low qubit counts. This paper introduces CutQC, a scalable hybrid computing approach that combines classical computers and quantum computers to enable evaluation of quantum circuits that cannot be run on classical or quantum computers alone. CutQC cuts large quantum circuits into smaller subcircuits, allowing them to be executed on smaller quantum devices. Classical postprocessing can then reconstruct the output of the original circuit. This approach offers significant runtime speedup compared with the only viable current alternative, purely classical simulation, and demonstrates evaluation of quantum circuits larger than either QC or classical simulation alone can handle. Furthermore, in real-system runs, CutQC achieves much higher quantum circuit evaluation fidelity using small prototype quantum computers than the state-of-the-art large NISQ devices achieve. Overall, this hybrid approach allows users to leverage classical and quantum computing resources to evaluate quantum programs far beyond the reach of either one alone.
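    The reconstruction idea can be illustrated in miniature as below, for the special case where a cut crosses no entangling gate, so the joint output distribution factorizes; actual CutQC handles cuts through entangled wires by expanding the cut point in a basis of measure-and-prepare terms, which this toy omits.

```python
import numpy as np

def reconstruct(dist_a, dist_b):
    # Kronecker-combine two subcircuit output distributions into a
    # distribution over the concatenated register (valid only when
    # the subcircuits are unentangled with each other).
    return np.kron(dist_a, dist_b)

dist_a = np.array([0.5, 0.0, 0.0, 0.5])       # 2-qubit subcircuit, e.g. a Bell pair
dist_b = np.array([0.25, 0.25, 0.25, 0.25])   # 2-qubit subcircuit, uniform output
full = reconstruct(dist_a, dist_b)            # 16-outcome, 4-qubit distribution
print(full.shape, full.sum())                 # (16,) 1.0
```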
  6. Variational quantum eigensolver (VQE) is a promising algorithm suitable for near-term quantum computers. VQE aims to approximate solutions to exponentially-sized optimization problems by executing a polynomial number of quantum subproblems. However, the number of subproblems scales as N^4 for typical problems of interest, a daunting growth rate that poses a serious limitation for emerging applications such as quantum computational chemistry. We mitigate this issue by exploiting the simultaneous measurability of subproblems corresponding to commuting terms. Our technique transpiles VQE instances into a format optimized for simultaneous measurement, ultimately yielding 8-30x lower cost. Our work also encompasses a synthesis tool for compiling simultaneous measurement circuits with minimal overhead. We demonstrate experimental validation of our techniques by estimating the ground state energy of the deuteron with a quantum computer. We also investigate the underlying statistics of simultaneous measurement and devise an adaptive strategy for mitigating harmful covariance terms.
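    A minimal sketch of the measurement-cost reduction appears below, using the common qubit-wise commutation (QWC) criterion: terms whose Paulis agree on every qubit up to identities can share one measurement circuit. The greedy first-fit grouping and the example terms are an illustrative stand-in for the paper's transpilation pass.

```python
def qwc(p, q):
    # Qubit-wise commuting: at each position the Paulis match or one is identity.
    return all(a == "I" or b == "I" or a == b for a, b in zip(p, q))

def greedy_qwc_groups(terms):
    # First-fit: put each term into the first group it QWC-commutes with entirely.
    groups = []
    for t in terms:
        for g in groups:
            if all(qwc(t, u) for u in g):
                g.append(t)
                break
        else:
            groups.append([t])
    return groups

terms = ["ZIII", "IZII", "ZZII", "XXII", "IIXX", "XXXX", "YYII"]
groups = greedy_qwc_groups(terms)
print(len(terms), "terms ->", len(groups), "measurement circuits:", groups)
```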